75 research outputs found

    Verification of the Parallel Pin-Wise Core Simulator pCTF/PARCSv3.2 in Operational Control Rod Drop Transient Scenarios

    Full text link
    This is an Accepted Manuscript of an article published by Taylor & Francis in Nuclear Science and Engineering in 2017, available online: https://www.tandfonline.com/doi/full/10.1080/00295639.2017.1320892

    Thanks to advances in computer technology, it is now feasible to obtain detailed reactor core descriptions for safety analysis of light water reactors (LWRs), representing the fuel element design realistically, as in three-dimensional coupled simulations of local neutron kinetics and thermal hydraulics. This scenario requires an efficient thermal-hydraulic code that can produce a response in a reasonable time for large-scale, detailed models. In two-fluid codes, such as the thermal-hydraulic subchannel code COBRA-TF, the time restriction is even more important, since the set of equations to be solved is more complex. We have developed a message passing interface (MPI) parallel version of COBRA-TF, called pCTF. The parallel code is based on a cell-oriented domain decomposition approach and performs well in models that consist of many cells. The Jacobian matrix is computed in parallel, with each processor in charge of calculating the coefficients related to a subset of the cells. Furthermore, the resulting system of linear equations is also solved in parallel, by exploiting solvers and preconditioners from PETSc. The goal of this study is to demonstrate the capability of the recently developed pCTF/PARCS coupled code to simulate large cores with a pin-by-pin level of detail in an acceptable computational time, using for this purpose two control rod drop operational transients that took place in the core of a three-loop pressurized water reactor. As a result, the main safety parameters of the core hot channel have been calculated by the coupled code at a pin level of detail, obtaining best-estimate results for this transient.

    This work has been partially supported by the Universitat Politecnica de Valencia under Projects COBRA_PAR (PAID-05-11-2810) and OpenNUC (PAID-05-12), and by the Spanish Ministerio de Economia y Competitividad under Projects SLEPc-HS (TIN2016-75985-P) and NUC-MULTPHYS (ENE2012-34585).

    Ramos Peinado, E.; Roman Moltó, J. E.; Abarca Giménez, A.; Miró Herrero, R.; Bermejo, J. A.; Ortego, A.; Posada-Barral, J. M. (2017). Verification of the Parallel Pin-Wise Core Simulator pCTF/PARCSv3.2 in Operational Control Rod Drop Transient Scenarios. Nuclear Science and Engineering, 187(3), 254-267. https://doi.org/10.1080/00295639.2017.1320892
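    The abstract describes the parallel solution strategy but, naturally, contains no code. As a purely illustrative companion, the sketch below shows how a row-partitioned sparse linear system can be assembled and solved with PETSc's Krylov solvers and preconditioners via petsc4py; the 1-D model problem, GMRES solver, and block-Jacobi preconditioner are assumptions for illustration, not pCTF's actual system or configuration.

```python
# Minimal petsc4py sketch: assemble a row-partitioned sparse system and solve it
# with a Krylov solver plus preconditioner. Illustrative only: the 1-D Laplacian,
# GMRES, and block-Jacobi choices are assumptions, not pCTF's actual setup.
from petsc4py import PETSc

n = 1000                                    # global number of unknowns (assumed)
A = PETSc.Mat().createAIJ([n, n])           # parallel sparse matrix, rows split over ranks
A.setUp()

rstart, rend = A.getOwnershipRange()        # each rank fills only the rows it owns
for i in range(rstart, rend):
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

b = A.createVecLeft()                       # right-hand side, distributed like A's rows
x = A.createVecRight()                      # solution vector
b.set(1.0)

ksp = PETSc.KSP().create()                  # Krylov solver object
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.GMRES)           # solver choice (assumed)
ksp.getPC().setType(PETSc.PC.Type.BJACOBI)  # preconditioner choice (assumed)
ksp.setFromOptions()                        # allow -ksp_type / -pc_type overrides at run time
ksp.solve(b, x)
```

    Run under mpiexec, each rank assembles and solves only the rows it owns, which mirrors in a simplified way the cell-oriented domain decomposition described above.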

    dispel4py: A Python framework for data-intensive scientific computing

    Get PDF
    This paper presents dispel4py, a new Python framework for describing abstract stream-based workflows for distributed data-intensive applications. These combine the familiarity of Python programming with the scalability of workflows. Data streaming is used to gain performance, rapid prototyping, and applicability to live observations. dispel4py enables scientists to focus on their scientific goals, avoiding distracting details and retaining flexibility over the computing infrastructure they use. The implementation therefore has to map dispel4py abstract workflows optimally onto target platforms chosen dynamically. We present four dispel4py mappings: Apache Storm, message-passing interface (MPI), multi-threading, and sequential, showing two major benefits: (a) smooth transitions from local development on a laptop to scalable execution for production work, and (b) scalable enactment on significantly different distributed computing infrastructures. Three application domains are reported, and measurements on multiple infrastructures show the optimisations achieved; they have provided demanding real applications and helped us develop effective training. dispel4py.org is an open-source project to which we invite participation. The effective mapping of dispel4py onto multiple target infrastructures demonstrates exploitation of data-intensive and high-performance computing (HPC) architectures and consistent scalability.
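    To make the abstract workflow model concrete, the following self-contained Python sketch mimics the idea of processing elements connected by streams and enacted by an interchangeable backend. It deliberately does not use the dispel4py API: the names PE, Graph, and run_sequential are invented here for exposition only.

```python
# Hypothetical illustration of a stream-based workflow model (NOT the dispel4py
# API): processing elements connected by streams into a graph, with the same
# abstract graph handed to different enactors (sequential, MPI, threads, Storm).
from collections import defaultdict

class PE:
    """A processing element: consumes one input item, emits zero or more."""
    def process(self, data):
        raise NotImplementedError

class Doubler(PE):
    def process(self, data):
        yield data * 2

class Printer(PE):
    def process(self, data):
        print(data)
        return ()

class Graph:
    def __init__(self):
        self.edges = defaultdict(list)      # producer PE -> list of consumer PEs
    def connect(self, src, dst):
        self.edges[src].append(dst)

def run_sequential(graph, source_pe, items):
    """Simplest possible enactor: depth-first streaming on a single process."""
    def emit(pe, data):
        for out in pe.process(data) or ():
            for consumer in graph.edges[pe]:
                emit(consumer, out)
    for item in items:
        emit(source_pe, item)

g = Graph()
double, show = Doubler(), Printer()
g.connect(double, show)
run_sequential(g, double, range(5))         # prints 0, 2, 4, 6, 8
```

    A distributed enactor would keep the same graph description and only change how PEs are placed and how streams are transported, which is the separation of concerns the paper describes.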

    MPI Forum Procedures Version 1.3

    Get PDF
    Procedures used for the operation of the Message-Passing Interface (MPI) Forum.

    Remote Memory Access Programming in MPI-3

    No full text

    Collectives and Communicators: A Case for Orthogonality

    No full text
    A major reason for the success of MPI as the standard for large-scale, distributed-memory programming is the economy and orthogonality of its key concepts. These very design principles suggest leaner and better support for stencil-like, sparse collective communication, while at the same time significantly reducing the number of concrete operation interfaces, extending the functionality that can be supported by high-quality MPI implementations, and provisioning for possible future, much more wide-ranging functionality.

    As a starting point for discussion, we suggest (re)defining communicators as the sole carriers of the topological structure over processes that determines the semantics of the collective operations, and limiting the functions that can associate topological information with communicators to the functions for distributed graph topology and inter-communicator creation. As a consequence, one set of interfaces for collective communication operations (in blocking, non-blocking, and persistent variants) will suffice, explicitly eliminating the MPI_Neighbor_ interfaces (in all variants) from the MPI standard. Topological structure will not be implied by Cartesian communicators, which in turn will have the sole function of naming processes in a (d-dimensional, Euclidean) geometric space. The geometric naming can be passed to the topology-creating functions as part of the communicator, and be used for process reordering and topological collective algorithm selection.

    Concretely, at the price of only one essential additional function, our suggestion can remove 10(+1) function interfaces from MPI-3, and 15 (or more) from MPI-4, while providing vastly more optimization scope for the MPI library implementation.
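    For readers unfamiliar with the interfaces under discussion, the hedged mpi4py sketch below shows today's MPI-3 pattern: a distributed graph topology is attached to a communicator with MPI_Dist_graph_create_adjacent, and data is then exchanged with the separate MPI_Neighbor_alltoall collective, exactly the kind of extra interface the authors propose to fold into the ordinary collectives once the communicator alone carries the topology. The periodic ring and payload sizes are assumptions for illustration.

```python
# Current MPI-3 usage (illustrative): distributed graph topology plus a
# neighborhood collective, shown here on a periodic ring of processes.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank's neighbors on a periodic ring (assumed topology for illustration).
left = (rank - 1) % size
right = (rank + 1) % size
sources = [left, right]        # ranks that send to me
destinations = [left, right]   # ranks I send to

# MPI-3: attach a distributed graph topology to a new communicator.
graph_comm = comm.Create_dist_graph_adjacent(sources, destinations)

# One double per outgoing neighbor, one slot per incoming neighbor.
sendbuf = np.array([float(rank), float(rank)])
recvbuf = np.empty(2, dtype='d')

# MPI-3 neighborhood collective; under the paper's proposal this separate
# MPI_Neighbor_* interface would disappear, its semantics being carried by
# the topology already attached to graph_comm.
graph_comm.Neighbor_alltoall([sendbuf, MPI.DOUBLE], [recvbuf, MPI.DOUBLE])
```

    Roughly speaking, under the proposal the final call would be an ordinary alltoall on graph_comm, with its neighborhood semantics determined solely by the topology the communicator carries.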

    On-line monitoring support in PVM and MPI

    No full text

    A Network-Centric Approach to Embedded Software for Tiny Devices

    No full text